Semi-Supervised Learning with Variational Bayesian Inference and Maximum Uncertainty Regularization
Authors
Abstract
We propose two generic methods for improving semi-supervised learning (SSL). The first integrates weight perturbation (WP) into existing “consistency regularization” (CR) based methods. We implement WP by leveraging variational Bayesian inference (VBI). The second method proposes a novel consistency loss called “maximum uncertainty regularization” (MUR). While most consistency losses act on perturbations in the vicinity of each data point, MUR actively searches for “virtual” points situated beyond this region that cause the most uncertain class predictions. This allows MUR to impose smoothness on a wider area of the input-output manifold. Our experiments show clear improvements in classification errors of various CR based methods when they are combined with VBI or MUR or both.
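The sketch below illustrates the idea behind a maximum-uncertainty-style consistency loss as described above; it is not the authors' implementation. Starting from an unlabeled input, it takes a few gradient-ascent steps on the predictive entropy to find a “virtual” point with highly uncertain predictions, then penalizes disagreement between the model's outputs at the original and virtual points. The function name, step count, step size, and KL direction are illustrative assumptions.

```python
# Minimal sketch (assumed form, not the paper's code) of a maximum-uncertainty
# consistency loss for an unlabeled batch x.
import torch
import torch.nn.functional as F


def max_uncertainty_consistency(model, x, steps=3, step_size=0.5):
    """Consistency loss between predictions at x and at a high-entropy virtual point."""
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)  # reference predictions, no gradient

    x_virtual = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        probs = F.softmax(model(x_virtual), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        grad, = torch.autograd.grad(entropy, x_virtual)
        # Ascend the entropy surface to move toward maximally uncertain inputs.
        x_virtual = (x_virtual + step_size * grad.sign()).detach().requires_grad_(True)

    log_p_virtual = F.log_softmax(model(x_virtual), dim=1)
    # Encourage predictions at the virtual point to match the clean predictions.
    return F.kl_div(log_p_virtual, p_clean, reduction="batchmean")
```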
Similar Resources
Semi-supervised SRL System with Bayesian Inference
We propose a new approach to perform semi-supervised training of Semantic Role Labeling models with a very small amount of initial labeled data. The proposed approach combines supervised and unsupervised training in a novel way, by forcing the supervised classifier to overgenerate potential semantic candidates and then letting unsupervised inference choose the best ones. Hence, the supervised clas...
Semi-Supervised Learning Based on Semiparametric Regularization
Semi-supervised learning plays an important role in the recent literature on machine learning and data mining, and the developed semi-supervised learning techniques have led to many data mining applications in recent years. This paper addresses the semi-supervised learning problem by developing a semiparametric regularization based approach, which attempts to discover the marginal distribution of...
Semi-supervised Learning by Higher Order Regularization
In semi-supervised learning, in the limit of infinitely many unlabeled points while fixing the labeled ones, the solutions of several graph Laplacian regularization based algorithms were shown by Nadler et al. (2009) to degenerate to constant functions with “spikes” at the labeled points in R^d for d ≥ 2. These optimization problems all use the graph Laplacian regularizer as a common penalty term. In this paper...
Transductive Inference and Semi-Supervised Learning
This chapter discusses the difference between transductive inference and semi-supervised learning. It argues that transductive inference captures the intrinsic properties of the mechanism for extracting additional information from the unlabeled data. It also shows an important role of transduction for creating noninductive models of inference. Let us start with the formal problem setting for t...
A Brief Survey on Semi-supervised Learning with Graph Regularization
In this survey, we go over a few historical works on semi-supervised learning problems which apply graph regularization to both labeled and unlabeled data to improve classification performance. These semi-supervised methods usually construct a nearest-neighbour graph on the instance space under a certain measure function, and then work under the smoothness assumption that class labels of samples...
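For illustration, a minimal sketch of the graph-based smoothness idea mentioned in the snippet above: build a k-nearest-neighbour similarity graph, form the graph Laplacian L = D - W, and solve the harmonic system for the unlabeled points so that predictions vary smoothly over the graph while agreeing with the labels. The kernel bandwidth, neighbourhood size, and function name are assumptions for this sketch.

```python
# Minimal sketch of graph-Laplacian label propagation under the smoothness assumption.
import numpy as np
from sklearn.neighbors import kneighbors_graph


def propagate_labels(X, y, labeled_mask, k=10, gamma=1.0):
    """Return smooth scores for unlabeled points; y holds +1/-1 on labeled entries."""
    # Symmetric kNN graph with RBF weights on the distances.
    A = kneighbors_graph(X, k, mode="distance", include_self=False).toarray()
    W = np.where(A > 0, np.exp(-gamma * A ** 2), 0.0)
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W  # graph Laplacian L = D - W
    u, l = ~labeled_mask, labeled_mask
    # Harmonic solution: L_uu f_u = -L_ul f_l.
    return np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, l)] @ y[l])
```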
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2021
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v35i8.16889